LLM 25-Day Course - Day 8: Claude Series (Anthropic)

Anthropic was founded by former OpenAI researchers with AI safety as its core value. The Claude series is strong in long-context processing, instruction following, and safety.

Claude Model Comparison (By Lineup)

Model Family   | Features                                      | Recommended Use
Claude Opus    | Highest quality, complex analysis/reasoning   | Difficult problem solving, quality-first workflows
Claude Sonnet  | Performance/cost balance                      | General services, coding assistance, document summarization
Claude Haiku   | Low latency/low cost, lightweight             | Bulk classification, real-time responses, simple tasks

Version snapshot names and pricing change frequently, so always check the official model documentation before making API calls.
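The tier trade-offs above can be encoded as a small routing helper. Note that the model names below are illustrative placeholder aliases, not guaranteed API IDs; always look up current snapshot names in the official model documentation.

```python
# Route a request to a Claude tier based on task priority.
# The model IDs below are placeholders -- check the official model
# documentation for current snapshot names before calling the API.
TIER_FOR_PRIORITY = {
    "quality": "claude-opus-latest",      # hardest problems, quality first
    "balanced": "claude-sonnet-latest",   # general services, coding help
    "speed": "claude-haiku-latest",       # bulk/real-time, simple tasks
}

def pick_model(priority: str) -> str:
    """Return a model alias for the given priority, defaulting to balanced."""
    return TIER_FOR_PRIORITY.get(priority, TIER_FOR_PRIORITY["balanced"])

print(pick_model("speed"))    # -> claude-haiku-latest
print(pick_model("unknown"))  # -> claude-sonnet-latest
```

Centralizing model choice like this also makes it easy to update snapshot names in one place when they change.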

Constitutional AI

Constitutional AI (CAI), Claude’s core technology, gives AI a “constitution” and has it self-regulate its own behavior.

Traditional RLHF approach:
  Humans directly judge good/bad responses -> High cost, low consistency

Constitutional AI approach:
  1. AI generates a response
  2. AI self-evaluates the response against the constitution (principles)
  3. Self-corrects when violations are found
  4. Retrains on the corrected data

Constitution examples:
  - "Do not provide harmful information"
  - "Do not present biased claims as facts"
  - "Acknowledge uncertainty when something is uncertain"

Anthropic API Basic Usage

# pip install anthropic
import anthropic

client = anthropic.Anthropic(api_key="YOUR_API_KEY")  # Environment variable recommended

message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    system="You are a friendly AI tutor. Please answer in English.",
    messages=[
        {"role": "user", "content": "Explain the LLM training process using an analogy."},
    ],
)

print(message.content[0].text)
print(f"Input tokens: {message.usage.input_tokens}")
print(f"Output tokens: {message.usage.output_tokens}")
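API calls can fail transiently (rate limits, network hiccups), so production code usually wraps calls like the one above in a retry. The helper below is a generic sketch, not part of the SDK; in real code you would catch the SDK's specific exception types (such as rate-limit errors) rather than bare Exception.

```python
import time

def with_retries(fn, attempts: int = 3, base_delay: float = 1.0):
    """Call fn(), retrying with exponential backoff on failure.

    Generic illustration only -- a real implementation should catch
    specific SDK exceptions instead of every Exception.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: surface the error
            time.sleep(base_delay * (2 ** attempt))

# Hypothetical usage, wrapping the messages.create call from above:
# reply = with_retries(lambda: client.messages.create(...))
```

Exponential backoff (1s, 2s, 4s, ...) is the conventional choice because it gives a rate-limited endpoint progressively more room to recover.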

Streaming and Multi-turn Conversation

import anthropic

client = anthropic.Anthropic()

# Streaming response
with client.messages.stream(
    model="claude-sonnet-4-20250514",
    max_tokens=512,
    messages=[
        {"role": "user", "content": "Explain Python decorators."},
    ],
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)
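When streaming, you often want the complete text afterwards as well as the incremental chunks. A minimal accumulator, demonstrated with a plain list standing in for stream.text_stream:

```python
def collect_stream(chunks) -> str:
    """Print chunks as they arrive and return the joined full text."""
    parts = []
    for text in chunks:
        print(text, end="", flush=True)  # incremental display
        parts.append(text)
    print()
    return "".join(parts)

# Simulated stream; with the SDK you would pass stream.text_stream instead.
full = collect_stream(["Decorators ", "wrap ", "functions."])
print(full)  # -> Decorators wrap functions.
```

The SDK's stream object also exposes helpers for retrieving the final message directly; check the SDK documentation for what your version provides.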

import anthropic

client = anthropic.Anthropic()

# Multi-turn conversation (include previous conversation in messages)
conversation = []

def chat_with_claude(user_message):
    conversation.append({"role": "user", "content": user_message})

    response = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=1024,
        system="You are a Python coding mentor.",
        messages=conversation,
    )

    assistant_text = response.content[0].text
    conversation.append({"role": "assistant", "content": assistant_text})
    return assistant_text

# Multi-turn conversation
print(chat_with_claude("What is a generator?"))
print("---")
print(chat_with_claude("Then what's the difference from an iterator?"))

Claude vs GPT Key Differences

Comparison               | Claude                                    | GPT
Context                  | 200K tokens                               | 128K tokens
Safety philosophy        | Constitutional AI                         | RLHF
Long-document processing | Very strong                               | Strong
Code generation          | Strong in latest top-tier Claude models   | Strong in latest top-tier GPT models
Instruction following    | Precise adherence to detailed instructions| Flexible interpretation
Multimodal               | Image input supported                     | Image + audio
API format               | Messages API                              | Responses API (recommended), Chat Completions (legacy)

Claude particularly excels in long document analysis, complex instruction execution, and code review.

Today’s Exercises

  1. Obtain an Anthropic API key and ask Claude “What is Python’s GIL?” Evaluate the accuracy and explanation style of the response.
  2. Summarize the differences between Constitutional AI and RLHF, and compare the pros and cons of each approach.
  3. Send the same prompt to both the latest OpenAI model and the latest Claude model, then compare the response style, length, and accuracy. What differences do you notice?
